19 research outputs found

    Inventing episodic memory: a theory of dorsal and ventral hippocampus


    Aftereffects in the perception of emotion following brief, masked adaptor faces

    Adaptation aftereffects are the tendency to perceive an ambiguous target stimulus, which follows an adaptor stimulus, as different from the adaptor. A duration dependence of face adaptation aftereffects has been demonstrated for durations of at least 500 ms, for identity-related judgments. Here we describe the duration dependence of the adaptation aftereffects of very brief (11.7–500 ms) backwardly masked faces, on both expression and identity category judgments of ambiguous target faces. We find significant aftereffects at a minimum duration of 23.5 ms for emotional expression and 47 ms for identity, but these are abolished by backward masking with an inverted face, even though these same adaptors can be correctly categorized above chance. The presence of a short-duration adaptation effect for expression might be mediated by rapid transfer of low spatial frequency (LSF) information. We tested this possibility by comparing aftereffects in low-pass and high-pass filtered ambiguous targets, and found no evidence of independent adaptation of an LSF-specific channel.
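
    As a rough illustration of the low-pass/high-pass comparison described above, the sketch below splits a grayscale face image into coarse (LSF) and fine (HSF) spatial-frequency content with a Gaussian filter. The cutoff (sigma) and function name are assumptions for illustration, not the study's actual parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def split_spatial_frequencies(face, sigma=4.0):
            # face: 2-D grayscale array; sigma sets the (assumed) frequency cutoff.
            face = face.astype(float)
            low_pass = gaussian_filter(face, sigma=sigma)  # keeps only coarse (LSF) content
            high_pass = face - low_pass                    # residual fine-scale (HSF) detail
            return low_pass, high_pass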

    Modelling Face Memory Reveals Task-generalizable Representations

    Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations. For example, people use their mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Yet, the actual information contents of these representations are rarely characterized, which hinders knowledge of the mechanisms that use them. Here, we modelled the three-dimensional representational contents of 4 faces that were familiar to 14 participants as work colleagues. The representational contents were created by reverse-correlating identity information generated on each trial with judgements of the face's similarity to the individual participant's memory of this face. In a second study, testing new participants, we demonstrated the validity of the modelled contents using everyday face tasks that generalize identity judgements to new viewpoints, age and sex. Our work highlights that such models of mental representations are critical to understanding generalization behaviour and its underlying information-processing mechanisms.
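
    A minimal sketch of the reverse-correlation step described above, under assumed variable names and shapes: each trial's randomly generated identity components are weighted by the participant's similarity rating, so components that reliably increase perceived similarity to the remembered face accumulate positive weights.

        import numpy as np

        def reverse_correlate(components, ratings):
            # components: (n_trials, n_features) identity information shown per trial
            # ratings:    (n_trials,) similarity judgements to the remembered face
            z = (ratings - ratings.mean()) / ratings.std()  # z-score the judgements
            return components.T @ z / len(z)                # one weight per identity feature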

    Dynamic Construction of Reduced Representations in the Brain for Perceptual Decision Behavior

    Summary: Over the past decade, extensive studies of the brain regions that support face, object, and scene recognition suggest that these regions have a hierarchically organized architecture that spans the occipital and temporal lobes [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14], where visual categorizations unfold over the first 250 ms of processing [15, 16, 17, 18, 19]. This same architecture is flexibly involved in multiple tasks that require task-specific representations (e.g. categorizing the same object as "a car" or "a Porsche"). While we partly understand where and when these categorizations happen in the occipito-ventral pathway, the next challenge is to unravel how they happen. That is, how does high-dimensional input collapse in the occipito-ventral pathway to become the low-dimensional representations that guide behavior? To address this, we investigated what information the brain processes in a visual perception task and visualized the dynamic representation of this information in brain activity. To do so, we developed stimulus information representation (SIR), an information-theoretic framework, to tease apart stimulus information that supports behavior from that which does not. We then tracked the dynamic representations of both in magnetoencephalographic (MEG) activity. Using SIR, we demonstrate that a rapid (∼170 ms) reduction of behaviorally irrelevant information occurs in the occipital cortex and that representations of the information that supports distinct behaviors are constructed in the right fusiform gyrus (rFG). Our results thus highlight how SIR can be used to investigate the component processes of the brain by considering interactions between three variables (stimulus information, brain activity, behavior), rather than just two, as is the current norm.
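
    The SIR framework itself is not specified in the abstract, but its information-theoretic core presumably rests on mutual information (MI) between stimulus information, brain activity, and behavior. A generic sketch of one such pairwise MI computation follows; the binning scheme and names are purely illustrative assumptions, not the authors' code.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def mi_stimulus_meg(stimulus_labels, meg_amplitudes, n_bins=8):
            # Bin the continuous MEG response into quantiles, then compute MI
            # (in nats) with the discrete stimulus variable.
            edges = np.quantile(meg_amplitudes, np.linspace(0, 1, n_bins + 1)[1:-1])
            binned = np.digitize(meg_amplitudes, edges)
            return mutual_info_score(stimulus_labels, binned)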

    Different computations over the same inputs produce selective behavior in algorithmic brain networks

    A key challenge in neuroimaging remains to understand where, when, and, now particularly, how human brain networks compute over sensory inputs to achieve behavior. To study such dynamic algorithms from mass neural signals, we recorded the magnetoencephalographic (MEG) activity of participants who resolved the classic XOR, OR, and AND functions as overt behavioral tasks (N = 10 participants/task, N-of-1 replications). Each function requires a different computation over the same inputs to produce the task-specific behavioral outputs. In each task, we found that source-localized MEG activity progresses through four computational stages identified within individual participants: (1) initial contralateral representation of each visual input in occipital cortex, (2) a joint, linearly combined representation of both inputs in midline occipital cortex and right fusiform gyrus, followed by (3) nonlinear task-dependent input integration in temporal-parietal cortex, and finally (4) behavioral response representation in postcentral gyrus. We demonstrate the specific dynamics of each computation at the level of individual sources. The spatiotemporal patterns of the first two computations are similar across the three tasks; the last two computations are task-specific. Our results therefore reveal where, when, and how dynamic network algorithms perform different computations over the same inputs to produce different behaviors.
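
    The computational contrast between the three tasks is easy to make concrete: all three functions see the same pair of binary inputs, but XOR, unlike OR and AND, is not linearly separable, so it cannot be read out from stage (2)'s linear combination alone. A plain truth-table illustration (not the authors' stimulus code):

        # Same two inputs, three different input-output mappings.
        for a in (0, 1):
            for b in (0, 1):
                print(f"inputs {a},{b} -> AND: {a & b}  OR: {a | b}  XOR: {a ^ b}")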

    The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres

    A key to understanding visual cognition is to determine where, when and how brain responses reflect the processing of the specific visual features that modulate categorization behavior - the what. The N170 is the earliest event-related potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information-processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information-processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features.

    Using online screening in the general population to detect participants at clinical high-risk for psychosis

    Introduction: Identification of participants at clinical high-risk (CHR) for the development of psychosis is an important objective of current preventive efforts in mental health research. However, the utility of using web-based screening approaches to detect CHR participants at the population level has not been investigated. Methods: We tested a web-based screening approach to identify CHR individuals. Potential participants were invited to a website via e-mail invitations, flyers, and invitation letters involving both the general population and mental health services. In total, 2,279 participants completed the 16-item version of the prodromal questionnaire (PQ-16) and a 9-item questionnaire of perceptual and cognitive aberrations (PCA) for the assessment of basic symptoms (BS) online. 52.3% of participants met a priori cut-off criteria for the PQ-16 and 73.6% for the PCA items. Of these, 1,787 participants were invited for a clinical interview and n = 356 interviews were conducted (response rate: 19.9%) using the Comprehensive Assessment of At-Risk Mental State (CAARMS) and the Schizophrenia Proneness Instrument, Adult version (SPI-A). n = 101 CHR participants and n = 8 participants with first-episode psychosis (FEP) were detected. ROC curve analysis revealed good to moderate sensitivity and specificity for predicting CHR status from the online results for both UHR and BS criteria (sensitivity/specificity: PQ-16 = 82%/46%; PCA = 94%/12%). Selecting a subset of 10 items from the combined PQ-16 and PCA improved specificity to 57% while only marginally affecting sensitivity (81%). CHR participants were characterized by similar levels of functioning and neurocognitive deficits as clinically identified CHR groups. Conclusion: These data provide evidence that CHR participants can be identified through population-based web screening. This could be an important strategy for early intervention and diagnosis of psychotic disorders.
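
    For readers unfamiliar with the reported metrics, the sketch below shows how sensitivity and specificity follow from cross-tabulating the online screening result against the interview-based CHR status; the function name and counts are placeholders, not the study's data.

        def screening_metrics(tp, fn, tn, fp):
            # tp/fn: interview-confirmed CHR cases screened positive/negative online
            # fp/tn: non-CHR cases screened positive/negative online
            sensitivity = tp / (tp + fn)  # share of true CHR cases detected online
            specificity = tn / (tn + fp)  # share of non-CHR cases screened out
            return sensitivity, specificity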

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage, over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions before dynamically converging onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming an initially thorough encoding of features over the N170, leaving only the detailed information important for perceptual decisions over the P300.

    Eye coding mechanisms in early human face event-related potentials

    In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information-processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times, and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes, and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time window in human face-processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye.
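
    A sketch of the sparse Gaussian-aperture sampling described above ("face and noise pictures sparsely sampled with small Gaussian apertures"), with the aperture count and width chosen arbitrarily for illustration rather than taken from the study:

        import numpy as np

        def gaussian_aperture_mask(height, width, n_apertures=10, sigma=12.0, rng=None):
            # Build a [0, 1] mask from randomly placed Gaussian windows; multiplying
            # it element-wise with a face image reveals only sparse fragments.
            rng = np.random.default_rng() if rng is None else rng
            yy, xx = np.mgrid[0:height, 0:width]
            mask = np.zeros((height, width))
            for _ in range(n_apertures):
                cy, cx = rng.integers(0, height), rng.integers(0, width)
                g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
                mask = np.maximum(mask, g)
            return mask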